154 research outputs found

    Deep Brain Stimulation, Authenticity and Value

    In this paper, we engage in dialogue with Jonathan Pugh, Hannah Maslen, and Julian Savulescu about how best to interpret the potential impacts of deep brain stimulation on the self. We consider whether ordinary people’s convictions about the true self should be interpreted in essentialist or existentialist ways. Like Pugh et al., we argue that it is useful to understand the notion of the true self as having both essentialist and existentialist components. We also consider two ideas from existentialist philosophy – Jean-Paul Sartre and Simone de Beauvoir’s ideas about “bad faith” and “ambiguity” – to argue that there can be value to patients in regarding themselves as having a certain amount of freedom to choose which aspects of themselves should be considered representative of their true selves. Lastly, we consider the case of an anorexia nervosa patient who shifts between conflicting mind-sets. We argue that mind-sets in which it is easier for the patient and his or her family to share values can plausibly be considered more representative of the patient’s true self, if this promotes a well-functioning relationship between the patient and the family. However, we also argue that families are well-advised to give patients room to figure out what such shared values mean to them, since it can be alienating for patients if they feel that others try to impose values on them from the outside.

    The Quantified Relationship

    The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In this article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship (QR). We identify eight core objections to the QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness toward this technology and advocate the development of a research agenda for the positive use of QR technologies.

    Automation, Work and the Achievement Gap

    Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability to participate in meaningful forms of work. Achievement gaps are interesting, in part, because they are the inverse of the (negative) responsibility gaps already widely discussed in the literature on AI ethics. Having described and explained the problem of achievement gaps, the article concludes by identifying four possible policy responses to the problem.

    A new control problem? Humanoid robots, artificial intelligence, and the value of control

    The control problem usually discussed in relation to robots and AI is that we might lose control over advanced technologies. When authors like Nick Bostrom and Stuart Russell discuss this control problem, they write in a way that suggests that having as much control as possible is good while losing control is bad. In life in general, however, not all forms of control are unambiguously positive and unproblematic. Some forms—e.g. control over other persons—are ethically problematic. Other forms of control are positive, and perhaps even intrinsically good. For example, one form of control that many philosophers have argued is intrinsically good and a virtue is self-control. In this paper, I relate these questions about control and its value to different forms of robots and AI more generally. I argue that the more robots are made to resemble human beings, the more problematic it becomes—at least symbolically speaking—to want to exercise full control over these robots. After all, it is unethical for one human being to want to fully control another human being. Accordingly, it might be seen as problematic—viz. as representing something intrinsically bad—to want to create humanoid robots that we exercise complete control over. In contrast, if there are forms of AI such that control over them can be seen as a form of self-control, then this might be seen as a virtuous form of control. The “new control problem”, as I call it, is the question of under what circumstances retaining and exercising complete control over robots and AI is unambiguously ethically good.

    Meaning and Anti-Meaning in Life and What Happens After We Die

    The absence of meaningfulness in life is meaninglessness. But what is the polar opposite of meaningfulness? In recent and ongoing work together with Stephen Campbell and Marcello di Paola respectively, I have explored what we dub ‘anti-meaning’: the negative counterpart of positive meaning in life. Here, I relate this idea of ‘anti-meaningful’ actions, activities, and projects to the topic of death, and in particular the deaths or suffering of those who will live after our own deaths. Connecting this idea of anti-meaning and what happens after our own deaths to recent work by Samuel Scheffler on what he calls ‘the collective afterlife’ and his four reasons to care about future generations, I argue that if we today make choices or have lifestyles that later lead to unnecessarily early deaths and otherwise avoidable suffering of people who will live after we have died, this robs our current choices and lifestyles of some of their meaning, perhaps even making them the opposite of meaningful in the long run.

    On the Universal Law and Humanity Formulas

    This dissertation is a philosophical commentary on the Prussian Enlightenment philosopher Immanuel Kant’s “Universal Law” and “Humanity” formulations of the categorical imperative (i.e. the most basic principle of morality or virtuousness). The former says to choose one’s basic guiding principles (or “maxims”) on the basis of their fitness to serve as universal laws; the latter, to always treat the humanity in each person as an end, and never as a means only. Commentators and critics have been puzzled by Kant’s claim that these are two alternative statements of the same basic law, and have raised various objections to Kant’s suggestion that these are the most basic formulas of a fully justified human morality. This dissertation offers new readings of these two formulas, shows how, on these readings, the formulas do indeed turn out to be alternative statements of the same basic moral law, and in the process responds to many of the standard objections raised against Kant’s theory. Its first chapter briefly explores the ways in which Kant draws on his philosophical predecessors such as Plato (and especially Plato’s Republic) and Jean-Jacques Rousseau. The second chapter offers a new reading of the relation between the universal law and humanity formulas by relating both of these to a third formula of Kant’s, the “Law of Nature” formula, and also to Kant’s ideas about laws in general and human nature in particular. The third chapter considers and rejects some influential recent attempts to understand Kant’s argument for the humanity formula, and offers an alternative reconstruction instead. Chapter four considers what it is to flourish as a human being in line with Kant’s basic formulas of morality, and argues that the standard readings of the humanity formula cannot properly account for its relation to Kant’s views about the highest human good.
    PhD, Philosophy, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/93978/1/nyholm_1.pd

    Employers have a Duty of Beneficence to Design for Meaningful Work: A General Argument and Logistics Warehouses as a Case Study

    Artificial intelligence-driven technology increasingly shapes work practices and, accordingly, employees’ opportunities for meaningful work (MW). In our paper, we identify five dimensions of MW: pursuing a purpose, social relationships, exercising skills and self-development, autonomy, and self-esteem and recognition. Because MW is an important good, lacking opportunities for MW is a serious disadvantage. Therefore, we need to know to what extent employers have a duty to provide this good to their employees. We hold that employers have a duty of beneficence to design for opportunities for MW when implementing AI-technology in the workplace. We argue that this duty of beneficence is supported by the three major ethical theories, namely, Kantian ethics, consequentialism, and virtue ethics. We defend this duty against two objections, including the view that it is incompatible with the shareholder theory of the firm. We then employ the five dimensions of MW as our analytical lens to investigate how AI-based technological innovation in logistic warehouses has an impact, both positively and negatively, on MW, and illustrate that design for MW is feasible. We further support this practical feasibility with the help of insights from organizational psychology. We end by discussing how AI-based technology has an impact both on meaningful work (often seen as an aspirational goal) and decent work (generally seen as a matter of justice). Accordingly, ethical reflection on meaningful and decent work should become more integrated to do justice to how AI-technology inevitably shapes both simultaneously.

    The Technological Future of Love

    How might emerging and future technologies—sex robots, love drugs, anti-love drugs, or algorithms to track, quantify, and ‘gamify’ romantic relationships—change how we understand and value love? We canvass some of the main ethical worries posed by such technologies, while also considering whether there are reasons for “cautious optimism” about their implications for our lives. Along the way, we touch on some key ideas from the philosophies of love and technology.
